Learning Generative Models across Incomparable Spaces
Generative Adversarial Networks have shown remarkable success in learning a
distribution that faithfully recovers a reference distribution in its entirety.
However, in some cases, we may want to only learn some aspects (e.g., cluster
or manifold structure), while modifying others (e.g., style, orientation or
dimension). In this work, we propose an approach to learn generative models
across such incomparable spaces, and demonstrate how to steer the learned
distribution towards target properties. A key component of our model is the
Gromov-Wasserstein distance, a notion of discrepancy that compares
distributions relationally rather than absolutely. While this framework
subsumes current generative models in identically reproducing distributions,
its inherent flexibility allows application to tasks in manifold learning,
relational learning and cross-domain learning.
Comment: International Conference on Machine Learning (ICML)
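The following is a minimal sketch, not the authors' implementation, of the idea that the Gromov-Wasserstein distance compares distributions "relationally rather than absolutely": only the pairwise distance matrices within each space enter the objective, so the two sample sets may live in incomparable spaces of different dimension. The POT library, the random point clouds, and the uniform weights are assumptions for illustration only.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
x = rng.normal(size=(30, 2))   # samples in a 2-D space
y = rng.normal(size=(40, 5))   # samples in an incomparable 5-D space

# Only intra-space pairwise distances are used; cross-space distances
# are never needed (and are undefined here, since dimensions differ).
Cx = ot.dist(x, x)
Cy = ot.dist(y, y)
Cx /= Cx.max()
Cy /= Cy.max()

p = ot.unif(len(x))  # uniform weights on each point cloud
q = ot.unif(len(y))

# Gromov-Wasserstein discrepancy between the two relational structures.
gw = ot.gromov.gromov_wasserstein2(Cx, Cy, p, q, loss_fun="square_loss")
print(f"Gromov-Wasserstein discrepancy: {gw:.4f}")
```

In a generative-modeling setting such as the one described above, a discrepancy of this form can serve as the training signal between generated samples and reference samples even when their ambient spaces differ.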
Probabilistic Bias Mitigation in Word Embeddings
It has been shown that word embeddings derived from large corpora tend to
incorporate biases present in their training data. Various methods for
mitigating these biases have been proposed, but recent work has demonstrated
that these methods hide but fail to truly remove the biases, which can still be
observed in word nearest-neighbor statistics. In this work we propose a
probabilistic view of word embedding bias. We leverage this framework to
present a novel method for mitigating bias which relies on probabilistic
observations to yield a more robust bias mitigation algorithm. We demonstrate
that this method effectively reduces bias according to three separate measures
of bias while maintaining embedding quality across various popular benchmark
semantic tasks.
Comment: 4 pages, 4 figures, Workshop on Human-Centric Machine Learning at NeurIPS 2019
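Below is a minimal sketch, not the paper's probabilistic method, of the kind of nearest-neighbor bias statistic the abstract alludes to, in which residual bias shows up in a word's neighborhood even after debiasing. The `embeddings` dictionary and the male/female anchor word lists are hypothetical inputs for illustration.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def neighbor_bias(word, embeddings, male_words, female_words, k=10):
    """Fraction of `word`'s k nearest neighbors that lie closer to the
    mean male-anchor vector than to the mean female-anchor vector;
    values far from 0.5 indicate a skewed neighborhood.
    `embeddings` is assumed to be a {word: np.ndarray} mapping."""
    target = embeddings[word]
    neighbors = sorted(
        (w for w in embeddings if w != word),
        key=lambda w: cosine(target, embeddings[w]),
        reverse=True,
    )[:k]
    male_mean = np.mean([embeddings[w] for w in male_words], axis=0)
    female_mean = np.mean([embeddings[w] for w in female_words], axis=0)
    male_leaning = sum(
        cosine(embeddings[w], male_mean) > cosine(embeddings[w], female_mean)
        for w in neighbors
    )
    return male_leaning / k
```

A diagnostic like this can be computed before and after a mitigation step to check whether bias has genuinely been reduced in neighborhood statistics rather than merely hidden along a single projection direction.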